b6f8dc086b2d60c5856e4ff517060392-Supplemental.pdf

Neural Information Processing Systems

In EXPAND, we augment each human-evaluated state to 5 states. To verify that 5 is sufficient, we also experimented with the number of augmentations per state required to obtain the best performance. AGIL [50] was designed to utilize saliency maps collected via human gaze. The network architectures are shown in Figure 1. Hence, we view the output of the attention network as a prediction of whether a pixel should be included in a human-annotated bounding box.
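
As a concrete illustration of that last point, the sketch below rasterizes human-annotated bounding boxes into the per-pixel binary target that such an attention network would be trained to predict. This is a minimal sketch under assumed conventions; the function name `boxes_to_mask`, the 84x84 frame size, and the binary cross-entropy pairing are illustrative and not taken from the supplementary material.

```python
import numpy as np

def boxes_to_mask(boxes, height, width):
    """Rasterize human-annotated bounding boxes into a binary per-pixel mask.

    boxes: iterable of (x1, y1, x2, y2) pixel coordinates.
    Returns a (height, width) float32 array: 1.0 inside any box, 0.0 elsewhere.
    """
    mask = np.zeros((height, width), dtype=np.float32)
    for x1, y1, x2, y2 in boxes:
        # Clip each box to the image bounds before writing it into the mask.
        x1, x2 = max(0, int(x1)), min(width, int(x2))
        y1, y2 = max(0, int(y1)), min(height, int(y2))
        mask[y1:y2, x1:x2] = 1.0
    return mask

# Hypothetical example: an 84x84 frame with two annotated boxes.
target = boxes_to_mask([(10, 12, 30, 40), (50, 5, 70, 25)], 84, 84)
# The attention network's per-pixel (sigmoid) output can then be trained
# against `target` with a binary cross-entropy loss.
```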



VanillaNet: the Power of Minimalism in Deep Learning (Supplementary Material)

Neural Information Processing Systems

Figure 1: Visualization of attention maps for samples classified by ResNet-50 and VanillaNet-9. CutMix: regularization strategy to train strong classifiers with localizable features.
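
For reference, a generic sketch of the CutMix augmentation cited in the caption is given below. It follows the standard recipe (cut a box whose area fraction is 1 - lambda from a shuffled copy of the batch and mix the labels in proportion to the pasted area) and is not code from the VanillaNet supplementary material.

```python
import numpy as np
import torch

def cutmix(images, labels, alpha=1.0):
    """Standard CutMix: paste a random box from a shuffled copy of the batch
    into each image and mix the labels in proportion to the pasted area.

    images: (B, C, H, W) tensor; labels: (B,) class indices.
    Returns mixed images and (labels_a, labels_b, lam) for the mixed loss.
    """
    lam = np.random.beta(alpha, alpha)
    perm = torch.randperm(images.size(0))
    shuffled = images[perm]                      # advanced indexing copies
    h, w = images.shape[-2:]
    # Box side lengths chosen so that the box area is (1 - lam) of the image.
    cut_h, cut_w = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    cy, cx = np.random.randint(h), np.random.randint(w)
    y1, y2 = np.clip(cy - cut_h // 2, 0, h), np.clip(cy + cut_h // 2, 0, h)
    x1, x2 = np.clip(cx - cut_w // 2, 0, w), np.clip(cx + cut_w // 2, 0, w)
    mixed = images.clone()
    mixed[:, :, y1:y2, x1:x2] = shuffled[:, :, y1:y2, x1:x2]
    # Recompute lam from the actual pasted area after clipping.
    lam = 1.0 - ((y2 - y1) * (x2 - x1)) / (h * w)
    return mixed, labels, labels[perm], lam

# Mixed loss: lam * CE(pred, labels_a) + (1 - lam) * CE(pred, labels_b)
```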



FFAM: Feature Factorization Activation Map for Explanation of 3D Detectors

Neural Information Processing Systems

LiDAR-based 3D object detection has made impressive progress recently, yet most existing models are black-box, lacking interpretability. Previous explanation approaches primarily focus on analyzing image-based models and are not readily applicable to LiDAR-based 3D detectors. In this paper, we propose a feature factorization activation map (FFAM) to generate high-quality visual explanations for 3D detectors. FFAM employs non-negative matrix factorization to generate concept activation maps and subsequently aggregates these maps to obtain a global visual explanation. To achieve object-specific visual explanations, we refine the global visual explanation using the feature gradient of a target object. Additionally, we introduce a voxel upsampling strategy to align the scale between the activation map and input point cloud. We qualitatively and quantitatively analyze FFAM with multiple detectors on several datasets. Experimental results validate the high-quality visual explanations produced by FFAM.
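
A minimal sketch of the factorization step is shown below: non-negative matrix factorization applied to a (spatial x channels) feature matrix yields one activation map per concept, which can then be aggregated into a global map. The helper name `concept_activation_maps`, the toy feature shape, and the use of scikit-learn's NMF are illustrative assumptions; the paper's object-specific gradient refinement and voxel upsampling steps are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import NMF

def concept_activation_maps(features, n_concepts=4):
    """Factorize a non-negative feature map into concept activation maps.

    features: (C, H, W) array, e.g. ReLU activations from a detector backbone
              (for a LiDAR detector this would be a BEV feature map).
    Returns (n_concepts, H, W) activation maps, one per discovered concept.
    """
    c, h, w = features.shape
    # Arrange spatial locations as rows and channels as columns: (H*W, C).
    v = np.maximum(features, 0).reshape(c, h * w).T
    nmf = NMF(n_components=n_concepts, init="nndsvda", max_iter=400)
    weights = nmf.fit_transform(v)               # (H*W, n_concepts)
    maps = weights.T.reshape(n_concepts, h, w)
    return maps

# Toy usage on a random feature map.
features = np.random.rand(64, 50, 50).astype(np.float32)
maps = concept_activation_maps(features, n_concepts=4)
global_map = maps.sum(axis=0)                    # aggregate into one heat map
```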


Grid Saliency for Context Explanations of Semantic Segmentation

Neural Information Processing Systems

Recently, there has been a growing interest in developing saliency methods that provide visual explanations of network predictions. Still, the usability of existing methods is limited to image classification models. To overcome this limitation, we extend the existing approaches to generate grid saliencies, which provide spatially coherent visual explanations for (pixel-level) dense prediction networks. As the proposed grid saliency makes it possible to spatially disentangle the object and its context, we specifically explore its potential to produce context explanations for semantic segmentation networks, discovering which context most influences the class predictions inside a target object area. We investigate the effectiveness of grid saliency on a synthetic dataset with an artificially induced bias between objects and their context, as well as on the real-world Cityscapes dataset using state-of-the-art segmentation networks. Our results show that grid saliency can be successfully used to provide easily interpretable context explanations and, moreover, can be employed for detecting and localizing contextual biases present in the data.
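
One way to make the idea concrete is a perturbation-style sketch: a coarse grid of mask logits is upsampled to pixel resolution, the image is blended with a baseline outside the mask, and the grid is optimized to preserve the target-class logits inside the object region while keeping the mask small. This is an assumed, simplified reading of grid saliency (the function name `grid_saliency`, the mean-color baseline, and the area penalty weight are illustrative choices), not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def grid_saliency(model, image, target_mask, target_class,
                  grid=(8, 8), steps=150, lam=0.05, lr=0.2):
    """Optimize a coarse context-saliency grid for a dense-prediction model.

    model:       segmentation network returning (1, classes, H, W) logits.
    image:       (1, 3, H, W) input tensor.
    target_mask: (H, W) bool tensor marking the object region to explain.
    """
    h, w = image.shape[-2:]
    grid_logits = torch.zeros(1, 1, *grid, requires_grad=True)
    # Simple baseline: the per-channel mean color of the image.
    baseline = image.mean(dim=(-2, -1), keepdim=True).expand_as(image)
    opt = torch.optim.Adam([grid_logits], lr=lr)
    for _ in range(steps):
        # Upsample the grid to a per-pixel mask m in [0, 1].
        m = torch.sigmoid(F.interpolate(grid_logits, size=(h, w),
                                        mode="bilinear", align_corners=False))
        perturbed = m * image + (1 - m) * baseline
        out = model(perturbed)                              # (1, C, H, W)
        # Preserve the target-class score inside the object area...
        score = out[0, target_class][target_mask].mean()
        # ...while penalizing the total mask area (sparser = more selective).
        loss = -score + lam * m.mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(grid_logits.detach())[0, 0]        # (grid_h, grid_w)

# Toy usage with a stand-in "segmentation network" (replace with a real one).
model = torch.nn.Conv2d(3, 5, kernel_size=3, padding=1)
image = torch.rand(1, 3, 64, 64)
target_mask = torch.zeros(64, 64, dtype=torch.bool)
target_mask[20:40, 20:40] = True
saliency = grid_saliency(model, image, target_mask, target_class=2)
```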